    Applying multi-criteria optimisation to develop cognitive models

    A scientific theory is developed by modelling empirical data in a range of domains. The goal of developing a theory is to optimise its fit to as many experimental settings as possible, whilst retaining qualitative properties such as ‘parsimony’ or ‘comprehensibility’. We formalise the task of developing theories of human cognition as a problem in multi-criteria optimisation. The task presents many challenges, including representing competing theories, coordinating the fit with multiple experiments, and combining competing results into suitable theories. Experiments demonstrate the development of a theory of categorisation, using multiple optimisation criteria in genetic algorithms to locate Pareto-optimal sets.
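
    The core computation here is identifying the Pareto-optimal (non-dominated) set. Below is a minimal sketch in Python, assuming each candidate theory is scored on several criteria where lower is better; the criteria and the scores are illustrative, not values from the paper.

        # Each candidate is scored on several criteria (lower is better),
        # e.g. misfit to two experiments plus a complexity penalty that
        # stands in for parsimony.

        def dominates(a, b):
            # a dominates b if it is at least as good on every criterion
            # and strictly better on at least one.
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def pareto_front(scores):
            # Indices of candidates not dominated by any other candidate.
            return [i for i, a in enumerate(scores)
                    if not any(dominates(b, a)
                               for j, b in enumerate(scores) if j != i)]

        candidates = [(0.12, 0.30, 5), (0.20, 0.10, 7), (0.15, 0.35, 9)]
        print(pareto_front(candidates))  # [0, 1]; the third is dominated by the first

    In a genetic algorithm, membership of the current front replaces a single scalar fitness during selection, so the population converges towards the whole trade-off surface rather than a single compromise theory.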

    Developing reproducible and comprehensible computational models

    Quantitative predictions for complex scientific theories are often obtained by running simulations on computational models. For a theory to meet with widespread acceptance, it is important that the model be reproducible and comprehensible by independent researchers. However, the complexity of computational models can make replication all but impossible. Previous authors have suggested that computer models should be developed using high-level specification languages or large amounts of documentation. We argue that neither suggestion is sufficient, as each deals with the prescriptive definition of the model and does not help generalise its use to new contexts. Instead, we argue that a computational model should be released as three components: (a) a well-documented implementation; (b) a set of tests illustrating each of the key processes within the model; and (c) a set of canonical results, for reproducing the model’s predictions in important experiments. The included tests and experiments provide the concrete exemplars required for easier comprehension of the model, as well as confirmation that independent implementations and later versions reproduce the theory’s canonical results.
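
    The proposal can be made concrete as executable tests. The sketch below is self-contained for illustration; the model, its methods, the experiment name, and the expected value are all hypothetical stand-ins, not drawn from any real model.

        import unittest

        # (a) A stand-in for the well-documented implementation; in a real
        # release this would be the published model package.
        class Model:
            def __init__(self):
                self.memory = []

            def learn(self, items):
                self.memory.extend(items)

        def run_experiment(name):
            # Stub returning a fixed prediction for the named experiment.
            return 0.72

        class KeyProcessTests(unittest.TestCase):
            # (b) tests illustrating key processes within the model
            def test_learning_adds_items_to_memory(self):
                model = Model()
                model.learn(["A", "B"])
                self.assertIn("A", model.memory)

        class CanonicalResults(unittest.TestCase):
            # (c) canonical results: independent implementations and later
            # versions should reproduce these predictions within tolerance
            def test_categorisation_experiment(self):
                self.assertAlmostEqual(run_experiment("categorisation-1"),
                                       0.72, places=2)

        if __name__ == "__main__":
            unittest.main()

    Packaged this way, a failing canonical-result test signals immediately when a reimplementation or a later revision has silently changed the theory’s predictions.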

    CHREST tutorial: Simulations of human learning

    CHREST (Chunk Hierarchy and REtrieval STructures) is a comprehensive computational model of human learning and perception. It has been used successfully to simulate data in a variety of domains, including the acquisition of syntactic categories, expert behaviour, concept formation, implicit learning, and the acquisition of multiple representations in physics for problem solving. The aim of this tutorial is to give participants an introduction to CHREST, show how it can be used to model various phenomena, and provide the knowledge needed to carry out their own modelling experiments.

    Multi-task learning and transfer: The effect of algorithm representation

    Searching multiple classes of learning algorithms for those which perform best across multiple tasks is a complex problem of multi-criteria optimisation. We use a genetic algorithm to locate sets of models which are not outperformed on all of the tasks. The genetic algorithm evolves a population containing multiple types of learning algorithms, with competition between individuals of different types. We find that inherent differences in the convergence times and performance levels of the different algorithms lead to misleading population effects. We explore the role that the algorithm representation and the initial population have in task performance. Our findings suggest that separating the representations of the different algorithms enhances performance, and that initial seeding is required to avoid premature convergence to non-optimal classes of algorithms.
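
    The seeding idea can be sketched briefly: allocate the initial population in equal shares per algorithm class, rather than sampling classes uniformly at random, so that slower-converging classes are not eliminated early. The class names and genome structure below are illustrative assumptions, not the paper’s actual setup.

        import random

        ALGORITHM_CLASSES = ["decision_tree", "neural_net", "rule_set"]

        def random_genome(alg_class):
            # A genome tagged with its class plus placeholder parameters.
            return {"class": alg_class,
                    "params": [random.random() for _ in range(4)]}

        def seeded_population(size):
            # Give each algorithm class an equal share of the initial
            # population, instead of sampling classes at random.
            share = size // len(ALGORITHM_CLASSES)
            return [random_genome(c)
                    for c in ALGORITHM_CLASSES
                    for _ in range(share)]

        population = seeded_population(30)  # 10 individuals per class

    Keeping each class’s representation separate also makes it straightforward to restrict crossover to individuals of the same class, so recombination never produces an ill-formed hybrid genome.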

    A distributed framework for semi-automatically developing architectures of brain and mind

    Developing comprehensive theories of low-level neuronal brain processes and high-level cognitive behaviours, and integrating the two, is an ambitious challenge that requires new conceptual, computational, and empirical tools. Given their complexity, such theories will almost certainly be expressed as computational systems. Here, we propose to use recent developments in grid technology to build a system for evolutionary scientific discovery, which will (a) enable empirical researchers to make their data widely available for use in developing and testing theories, and (b) enable theorists to semi-automatically develop computational theories. We illustrate these ideas with a case study from the domain of categorisation.
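
    The division of labour the framework proposes can be sketched as two roles. The class and function names below are hypothetical; only the split between data providers and theory developers is taken from the abstract.

        class DataRepository:
            # (a) Empirical researchers publish datasets for shared use.
            def __init__(self):
                self._datasets = {}

            def publish(self, name, data):
                self._datasets[name] = data

            def fetch(self, name):
                return self._datasets[name]

        def develop_theory(repository, candidate_theories, dataset_name):
            # (b) Theorists (semi-automatically) score candidate
            # computational theories against published data, keeping the
            # best-fitting one.
            data = repository.fetch(dataset_name)
            return min(candidate_theories, key=lambda t: t.misfit(data))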

    An investigation into the effect of ageing on expert memory with CHREST

    CHREST is a cognitive architecture that models human perception, learning, memory, and problem solving, and which has successfully simulated a wide range of human experimental data on chess. In this paper, we describe an investigation into the effects of ageing on expert memory using CHREST. The results of the simulations are related to the literature on ageing. The study illustrates how Computational Intelligence can be used to understand complex phenomena that are affected by multiple variables evolving dynamically over time, and that have direct practical implications for human societies.